Predictive Performance Of Machine Learning Algorithms For Ore Reserve Estimation In Sparse And Imprecise Data
Thesis (Ph.D.) University of Alaska Fairbanks, 2006
Traditional geostatistical estimation techniques have been used predominantly in the mining industry for the purpose of ore reserve estimation. Determination of mineral reserves has always posed a considerable challenge to mining engineers due to the geological complexities generally associated with the phenomenon of ore body formation. Considerable research over the years has resulted in the development of a number of state-of-the-art methods for predictive spatial mapping tasks such as ore reserve estimation. Recent advances in the use of machine learning algorithms (MLA) have provided a new approach to solving this age-old problem. This thesis is therefore focused on the use of two MLA, viz. the neural network (NN) and the support vector machine (SVM), for the purpose of ore reserve estimation. Application of the MLA has been elaborated with two complex drill hole datasets. The first dataset is placer gold drill hole data characterized by a high degree of spatial variability, sparseness, and noise, while the second dataset is obtained from a continuous lode deposit. The application and success of the models developed using these MLA for ore reserve estimation depend to a large extent on the data subsets on which they are trained and subsequently on the selection of appropriate model parameters. Model data subsets obtained by random data division are not desirable in sparse data conditions, since random division usually results in statistically dissimilar subsets, thereby reducing their applicability. Therefore, an ideal technique for data subdivision has been suggested in the thesis. Additionally, issues pertaining to optimum model development have also been discussed. To investigate the accuracy and applicability of the MLA for ore reserve estimation, their generalization ability was compared with the geostatistical ordinary kriging (OK) method.
The analysis of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Error (ME), and coefficient of determination (R²) as indices of model performance indicated that the MLA may significantly improve predictive ability and thereby reduce the inherent risk in ore reserve estimation.
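The four performance indices named in this abstract have standard definitions; a minimal sketch of how they are computed (illustrative only, not the thesis code — the sample assay values are hypothetical):

```python
# Illustrative computation of the model-performance indices named above:
# Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Error (ME),
# and the coefficient of determination (R^2).

def performance_indices(observed, predicted):
    n = len(observed)
    errors = [p - o for o, p in zip(observed, predicted)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    me = sum(errors) / n                      # signed: reveals systematic bias
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot
    return {"MSE": mse, "MAE": mae, "ME": me, "R2": r2}

# Hypothetical grade values (g/t), for illustration only
obs = [1.2, 0.8, 2.5, 1.9, 0.4]
pred = [1.0, 0.9, 2.2, 2.0, 0.6]
print(performance_indices(obs, pred))
```

Note that ME is kept signed: a nonzero ME indicates systematic over- or under-estimation of grade, which MSE and MAE alone cannot distinguish.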
Guaranteed Conformance of Neurosymbolic Models to Natural Constraints
Deep neural networks have emerged as the workhorse for a large section of
robotics and control applications, especially as models for dynamical systems.
Such data-driven models are in turn used for designing and verifying autonomous
systems. This is particularly useful in modeling medical systems where data can
be leveraged to individualize treatment. In safety-critical applications, it is
important that the data-driven model is conformant to established knowledge
from the natural sciences. Such knowledge is often available or can often be
distilled into a (possibly black-box) model M. For instance, the unicycle
model for an F1 racing car. In this light, we consider the following problem -
given a model M and a state transition dataset, we wish to best approximate the
system model while being a bounded distance away from M. We propose a method to
guarantee this conformance. Our first step is to distill the dataset into a few
representative samples, called memories, using the idea of a growing neural gas.
Next, using these memories we partition the state space into disjoint subsets
and compute bounds that should be respected by the neural network, when the
input is drawn from a particular subset. This serves as a symbolic wrapper for
guaranteed conformance. We argue theoretically that this leads only to a bounded
increase in approximation error, which can be controlled by increasing the
number of memories. We experimentally show that on three case studies (Car
Model, Drones, and Artificial Pancreas), our constrained neurosymbolic models
conform to specified models M (each encoding various constraints), with
order-of-magnitude improvements compared to the augmented Lagrangian and
vanilla training methods.
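The partition-and-clamp idea in this abstract can be shown schematically. The following sketch is my own simplification, not the authors' implementation: inputs are assigned to the region of their nearest memory, and the network output is clipped to that region's precomputed bounds (all names, the 1-D state space, and the numeric bounds are illustrative assumptions):

```python
# Schematic sketch of a "symbolic wrapper": the state space is partitioned
# by nearest memory point, and the network prediction is clipped to
# per-partition bounds derived from the reference model M.
# Everything here is an illustrative assumption, not the paper's code.

def nearest_memory(x, memories):
    """Index of the memory closest to input x (1-D states for simplicity)."""
    return min(range(len(memories)), key=lambda i: abs(x - memories[i]))

def conformant_predict(x, raw_predict, memories, bounds):
    """Clamp the raw network output to the bounds of x's partition."""
    i = nearest_memory(x, memories)
    lo, hi = bounds[i]
    return max(lo, min(hi, raw_predict(x)))

# Toy setup: two memories, with bounds chosen around a reference model M(x) = 2x
memories = [0.0, 10.0]
bounds = [(-1.0, 3.0), (18.0, 22.0)]
noisy_net = lambda x: 2.0 * x + 5.0   # a network that violates the bounds

print(conformant_predict(1.0, noisy_net, memories, bounds))  # clipped to region 0
print(conformant_predict(9.0, noisy_net, memories, bounds))  # clipped to region 1
```

The wrapper guarantees conformance by construction: whatever the network outputs, the returned value lies inside the admissible interval of the active partition.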
Design of a Customized Multi-Directional Layered Deposition System Based on Part Geometry
Multi-Direction Layered Deposition (MDLD) reduces the need for supports by depositing
on a part along multiple directions. This requires the design of a new mechanism to reorient the part, so that the deposition head can approach from different orientations.
We present a customized compliant parallel kinematic machine design configured to
deposit a set of part geometries. Relationships between the process planning for the
MDLD of a part geometry and considerations in the design of the customized machine
mechanism are illustrated. MDLD process planning is based on progressive part
decomposition, and the kinematic machine design uses dual number algebra and screw
theory.
Memory-Consistent Neural Networks for Imitation Learning
Imitation learning considerably simplifies policy synthesis compared to
alternative approaches by exploiting access to expert demonstrations. For such
imitation policies, errors away from the training samples are particularly
critical. Even rare slip-ups in the policy action outputs can compound quickly
over time, since they lead to unfamiliar future states where the policy is
still more likely to err, eventually causing task failures. We revisit simple
supervised "behavior cloning" for conveniently training the policy from
nothing more than pre-recorded demonstrations, but carefully design the model
class to counter the compounding error phenomenon. Our "memory-consistent
neural network" (MCNN) outputs are hard-constrained to stay within clearly
specified permissible regions anchored to prototypical "memory" training
samples. We provide a guaranteed upper bound for the sub-optimality gap induced
by MCNN policies. Using MCNNs on 9 imitation learning tasks, with MLP,
Transformer, and Diffusion backbones, spanning dexterous robotic manipulation
and driving, proprioceptive inputs and visual inputs, and varying sizes and
types of demonstration data, we find large and consistent gains in performance,
validating that MCNNs are better-suited than vanilla deep neural networks for
imitation learning applications.
Website: https://sites.google.com/view/mcnn-imitation
Comment: 22 pages (9 main pages)
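The compounding-error argument motivating MCNNs can be illustrated numerically: if each step's small action error pushes the state further off-distribution, where the policy errs by a larger factor, the deviation from the expert trajectory grows geometrically with the horizon. A toy sketch (my own illustration, not from the paper; the per-step error and growth factor are arbitrary assumptions):

```python
# Toy illustration of error compounding in imitation policies: a per-step
# action error `epsilon` pushes the state off-distribution, where the policy
# errs by a factor `growth` > 1 more, so total deviation grows geometrically.
# Both parameters are illustrative assumptions.

def compounded_deviation(horizon, epsilon=0.01, growth=1.5):
    deviation = 0.0
    for _ in range(horizon):
        deviation = growth * deviation + epsilon  # off-distribution amplification
    return deviation

for t in (1, 10, 20):
    print(t, round(compounded_deviation(t), 3))
```

Even a 1% per-step error becomes a large trajectory deviation over a modest horizon under this model, which is why hard-constraining outputs near the demonstration data (as MCNNs do) can pay off.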
Exploring with Sticky Mittens: Reinforcement Learning with Expert Interventions via Option Templates
Long-horizon robot learning tasks with sparse rewards pose a significant challenge for current reinforcement learning algorithms. A key feature enabling humans to learn challenging control tasks is that they often receive expert intervention, which enables them to understand the high-level structure of the task before mastering low-level control actions. We propose a framework for leveraging expert intervention to solve long-horizon reinforcement learning tasks. We consider option templates, which are specifications encoding a potential option that can be trained using reinforcement learning. We formulate expert intervention as allowing the agent to execute option templates before learning an implementation. This enables the agent to use an option before committing costly resources to learning it. We evaluate our approach on three challenging reinforcement learning problems, showing that it outperforms state-of-the-art approaches by two orders of magnitude.
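The option-template idea as described here can be sketched as a placeholder that the agent executes via expert intervention until a learned implementation exists. This is my own reading of the abstract, not the authors' code, and all names are hypothetical:

```python
# Sketch of an "option template" as I read the abstract: a specification the
# agent can execute through expert intervention before any implementation is
# learned, so high-level task structure can be explored cheaply first.
# Class and method names are illustrative assumptions.

class OptionTemplate:
    def __init__(self, name, expert):
        self.name = name
        self.expert = expert      # expert intervention: oracle execution
        self.learned = None       # later filled in by RL training

    def execute(self, state):
        # Use the learned implementation once available, otherwise fall
        # back to the expert executing the template on the agent's behalf.
        policy = self.learned if self.learned is not None else self.expert
        return policy(state)

# Toy symbolic states: the expert executes the template before it is learned
pick = OptionTemplate("pick_up_block", expert=lambda s: s + ["holding_block"])
print(pick.execute(["at_table"]))
pick.learned = lambda s: s + ["holding_block"]   # trained implementation
print(pick.execute(["at_table"]))
```

The design choice mirrors the abstract's framing: the agent can rely on the option's effect (via the expert) while deciding whether it is worth the cost of training an implementation.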
Slow light, open cavity formation, and large longitudinal electric field on slab waveguide made of indefinite permittivity materials
The optical properties of slab waveguides made of indefinite permittivity
(ε) materials (IEM) are considered. In this medium the transverse
permittivity is negative while the longitudinal permittivity is positive. At
any given frequency the waveguide supports an infinite number of transverse
magnetic (TM) eigenmodes. For a slab waveguide with a fixed thickness, at most
only one TM mode is forward-wave. The rest of them are backward waves which can
have very large phase index. At a critical thickness, the waveguide supports
degenerate forward- and backward-wave modes with zero group velocity. Above the
critical thickness, the waveguide supports complex-conjugate decay modes
instead of propagating modes. The presence of loss in IEMs will lift the
degeneracy, resulting in modes with finite group velocity. Feasible realization
is proposed. The performance of IEM waveguide is analyzed and possible
applications are discussed which are supported by numerical calculations. These
slab waveguides can be used to make optical delay lines in optical buffers to
slow down and trap light, to form open cavities, to generate strong
longitudinal electric fields, and as phase shifters in optical integrated
circuits.
Comment: 14 figures, 12 pages in RevTeX
Evolution of spiral and scroll waves of excitation in a mathematical model of ischaemic border zone
Abnormal electrical activity from the boundaries of ischaemic cardiac tissue
is recognized as one of the major causes of ischaemia-reperfusion
arrhythmias. Here we present a theoretical analysis of the waves of electrical
activity that can arise on the boundary of a cardiac cell network upon its
recovery from ischaemia-like conditions. The main factors included in our
analysis are macroscopic gradients of the cell-to-cell coupling and cell
excitability and microscopic heterogeneity of individual cells. The interplay
between these factors allows one to explain how spirals form, drift together
with the moving boundary, get transiently pinned to local inhomogeneities, and
finally penetrate into the bulk of the well-coupled tissue where they reach
macroscopic scale. The asymptotic theory of the drift of spiral and scroll
waves based on response functions provides explanation of the drifts involved
in this mechanism, with the exception of effects due to the discreteness of
cardiac tissue. In particular, this asymptotic theory allows an extrapolation
of 2D events into 3D, which has shown that cells within the border zone can
give rise to 3D analogues of spirals, the scroll waves. When and if such scroll
waves escape into a better coupled tissue, they are likely to collapse due to
the positive filament tension. However, our simulations have shown that such
collapse of newly generated scrolls is not inevitable and that under certain
conditions the filament tension becomes negative, causing scroll filaments to
expand and multiply, leading to a fibrillation-like state within small areas of
cardiac tissue.
Comment: 26 pages, 13 figures, appendix and 2 movies, as accepted to PLoS ONE 2011/08/0
Process flowsheet analysis of pervaporation-based hybrid processes in the production of ethyl tert-butyl ether
BACKGROUND
The manufacturing process of ethyl tert-butyl ether (ETBE) involves the separation of ETBE, mixed C4 hydrocarbons and unreacted ethanol. Unfortunately, the unreacted ethanol forms azeotropic mixtures with ETBE that are difficult to separate by distillation. One of the alternative methods to overcome this limitation is the application of hybrid distillation–pervaporation processes with alcohol-selective membranes.
RESULTS
Simulation tasks were carried out with the process simulation software Aspen Plus, and the results of the alternative process flowsheets that arise from the relative location of the separation technologies (for a target product purity) have been compared on the basis of the required membrane area and energy consumption. Thus, in the case study analyzed, seven pervaporation modules located on a sidestream withdrawal, with a total membrane area of 210 m², are required to obtain 6420 kg h⁻¹ of ETBE with a purity of 95.2 wt%. The retentate stream is returned to the column, while the permeate stream, with a high ethanol content, is recycled back to feed the reactors.
CONCLUSION
Incorporating pervaporation modules in the process flowsheet for the production of ETBE allows unloading of the main separation unit (the debutanizer column), thereby reducing energy consumption and operating costs and increasing throughput.
Financial support from the Spanish Ministry of Science under the projects CTM2013-44081-R (MINECO, Spain-FEDER 2014–2020), CTQ2015-66078-R and CTQ2016-75158-R is gratefully acknowledged. Adham Norkobilov also thanks the SILKROUTE Project for a PhD scholarship funded by the European Commission through the Erasmus Mundus Action 2 Programme.
Science with the Daksha High Energy Transients Mission
We present the science case for the proposed Daksha high energy transients
mission. Daksha will comprise two satellites covering the entire sky from
1 keV up to MeV energies. The primary objectives of the mission are to discover and
characterize electromagnetic counterparts to gravitational wave sources, and to
study Gamma Ray Bursts (GRBs). Daksha is a versatile all-sky monitor that can
address a wide variety of science cases. With its broadband spectral response,
high sensitivity, and continuous all-sky coverage, it will discover fainter and
rarer sources than any other existing or proposed mission. Daksha can make key
strides in GRB research with polarization studies, prompt soft spectroscopy,
and fine time-resolved spectral studies. Daksha will provide continuous
monitoring of X-ray pulsars. It will detect magnetar outbursts and high energy
counterparts to Fast Radio Bursts. Using Earth occultation to measure source
fluxes, the two satellites together will obtain daily flux measurements of
bright hard X-ray sources including active galactic nuclei, X-ray binaries, and
slow transients like Novae. Correlation studies between the two satellites can
be used to probe primordial black holes through lensing. Daksha will have a set
of detectors continuously pointing towards the Sun, providing excellent hard
X-ray monitoring data. Closer to home, the high sensitivity and time resolution
of Daksha can be leveraged for the characterization of Terrestrial Gamma-ray
Flashes.
Comment: 19 pages, 7 figures. Submitted to ApJ. More details about the mission at https://www.dakshasat.in